Results 1 - 20 of 86
1.
bioRxiv; 2023 Nov 06.
Article in English | MEDLINE | ID: mdl-37986779

ABSTRACT

The primary visual cortex (V1) in individuals born blind is engaged in a wide spectrum of tasks and sensory modalities, including audition, touch, language, and memory. This widespread involvement raises questions regarding the constancy of its role and whether it might exhibit flexibility in its function over time, connecting to diverse network functions in response to task-specific demands. This would suggest that reorganized V1 takes on a role similar to that of cognitive multiple-demand system regions. Alternatively, the varying patterns of plasticity observed in the blind V1 may be attributable to individual factors, whereby different blind individuals recruit V1 for different functions, highlighting the immense idiosyncrasy of plasticity. In support of this second account, we have recently shown that V1 functional connectivity varies greatly across blind individuals. But do these patterns reflect stable individual plasticity, or merely the momentary state of a multiple-demand system now inhabiting V1? Here we tested whether individual connectivity patterns from the visual cortex of blind individuals are stable over time. We show that over two years, fMRI functional connectivity from the primary visual cortex is unique and highly stable in a small sample of repeatedly sampled congenitally blind individuals. Further, using multivoxel pattern analysis, we demonstrate that the unique reorganization patterns of these individuals allow decoding of participant identity. Together with recent evidence for substantial individual differences in visual cortex connectivity, this indicates there may be a consistent role for the visual cortex in blindness, one that may differ for each individual. Further, it suggests that the variability in visual reorganization across blind individuals could be used to seek stable neuromarkers for sight rehabilitation and assistive approaches.
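The identity-decoding logic described above can be illustrated with a toy connectivity-fingerprinting sketch. All data below are simulated; the subject count, edge count, noise level, and the nearest-neighbour-correlation classifier are illustrative assumptions, not the study's actual MVPA pipeline.

```python
# Toy sketch: re-identify subjects across sessions from their simulated
# connectivity patterns via nearest-neighbour correlation.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_edges = 5, 200

# Stable subject-specific connectivity pattern plus independent session noise.
base = rng.normal(size=(n_subjects, n_edges))
session1 = base + 0.3 * rng.normal(size=base.shape)  # first scan
session2 = base + 0.3 * rng.normal(size=base.shape)  # scan ~2 years later

def identify(probe, gallery):
    """Index of the gallery pattern most correlated with the probe."""
    return int(np.argmax([np.corrcoef(probe, g)[0, 1] for g in gallery]))

hits = sum(identify(session2[s], session1) == s for s in range(n_subjects))
accuracy = hits / n_subjects
```

When each individual's pattern is stable relative to the session noise, every probe is matched back to its own first-session pattern.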

2.
J Neurosci; 43(46): 7868-7878, 2023 Nov 15.
Article in English | MEDLINE | ID: mdl-37783506

ABSTRACT

Motor actions, such as reaching or grasping, can be decoded from fMRI activity of early visual cortex (EVC) in sighted humans. This effect can depend on vision or visual imagery, or alternatively, could be driven by mechanisms independent of visual experience. Here, we show that the actions of reaching in different directions can be reliably decoded from fMRI activity of EVC in congenitally blind humans (both sexes). Thus, neither visual experience nor visual imagery is necessary for EVC to represent action-related information. We also demonstrate that, within EVC of blind humans, the accuracy of reach direction decoding is highest in areas typically representing foveal vision and gradually decreases in areas typically representing peripheral vision. We propose that this might indicate the existence of a predictive, hard-wired mechanism of aligning action and visual spaces. This mechanism might send action-related information primarily to the high-resolution foveal visual areas, which are critical for guiding and online correction of motor actions. Finally, we show that, beyond EVC, the decoding of reach direction in blind humans is most accurate in dorsal stream areas known to be critical for visuo-spatial and visuo-motor integration in the sighted. Thus, these areas can develop space and action representations even in the lifelong absence of vision. Overall, our findings in congenitally blind humans match previous research on the action system in the sighted, and suggest that the development of action representations in the human brain might be largely independent of visual experience. SIGNIFICANCE STATEMENT: Early visual cortex (EVC) was traditionally thought to process only visual signals from the retina. Recent studies proved this account incomplete, and showed EVC involvement in many activities not directly related to incoming visual information, such as memory, sound, or action processing. Is EVC involved in these activities because of visual imagery?
Here, we show robust reach direction representation in EVC of humans born blind. This demonstrates that EVC can represent actions independently of vision and visual imagery. Beyond EVC, we found that reach direction representation in blind humans is strongest in dorsal brain areas, critical for action processing in the sighted. This suggests that the development of action representations in the human brain is largely independent of visual experience.


Subjects
Visual Cortex, Visual Perception, Male, Female, Humans, Brain, Visual Cortex/diagnostic imaging, Brain Mapping, Blindness, Magnetic Resonance Imaging
3.
Restor Neurol Neurosci; 41(3-4): 115-127, 2023.
Article in English | MEDLINE | ID: mdl-37742669

ABSTRACT

BACKGROUND: The default mode network (DMN) is a large-scale brain network tightly correlated with self and self-referential processing, activated by intrinsic tasks and deactivated by externally directed tasks. OBJECTIVE: In this study, we investigate DMN activation during progressive muscle relaxation (PMR) and examine whether the movement of different body parts produces differential activation patterns. METHODS: We employed neuroimaging to investigate DMN activity during simple body movements performed as part of progressive muscle relaxation, focusing on differentiating the neural response to facial movements from that to movements of other body parts. RESULTS: Our results show that the movement of different body parts led to deactivation in several DMN nodes, namely the temporal poles, hippocampus, medial prefrontal cortex (mPFC), and posterior cingulate cortex. However, facial movement induced an inverted, selectively positive BOLD pattern in precisely some of these areas. Moreover, areas in the temporal poles selective for face movement showed functional connectivity not only with the hippocampus and mPFC but also with the nucleus accumbens. CONCLUSIONS: Our findings suggest that both conceptual and embodied self-related processes, including body movements during progressive muscle relaxation, may be mapped onto shared brain networks. This could enhance our understanding of how practices like PMR influence DMN activity and potentially offer insights to inform therapeutic strategies that rely on mindful body movements.


Subjects
Brain Mapping, Default Mode Network, Brain/physiology, Gyrus Cinguli, Hippocampus/diagnostic imaging, Magnetic Resonance Imaging, Nerve Net/diagnostic imaging, Nerve Net/physiology
4.
Neuropsychologia; 190: 108685, 2023 Nov 05.
Article in English | MEDLINE | ID: mdl-37741551

ABSTRACT

Accumulating evidence over the last decades has given rise to a new theory of brain organization, positing that cortical regions are recruited for specific tasks irrespective of the sensory modality via which information is channeled. For instance, the visual reading network has been shown to be recruited for reading via the tactile Braille code in congenitally blind adults. Yet how rapidly non-typical sensory input modulates activity in typically visual regions remains unexplored. To this aim, we developed a novel reading orthography, termed OVAL, enabling congenitally blind adults to quickly acquire reading via the auditory modality. OVAL uses the EyeMusic, a visual-to-auditory sensory-substitution device (SSD), to transform visually presented letters, optimized for auditory conversion, into sound. Using fMRI, we show modulation in the right ventral visual stream following 2 h of same-day training. Crucially, following more extensive training (~12 h) we show that OVAL reading recruits the left ventral visual stream, including the location of the Visual Word Form Area, a key grapheme-responsive region within the visual reading network. Our results show that while 2 h of SSD training already yields recruitment of the deprived ventral visual stream by auditory stimuli, computation-selective cross-modal recruitment requires longer training to establish.
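As a rough illustration of the visual-to-auditory principle OVAL builds on, the sketch below sonifies a binary letter image by sweeping its columns over time and mapping higher pixel rows to higher pitches. The scale, frequency values, and column duration are assumptions for illustration; the actual EyeMusic additionally encodes colour as musical timbre.

```python
# Sweep a binary letter image column by column; each lit pixel becomes a
# (onset time, frequency) event, with higher rows mapped to higher pitches.
PENTATONIC = [392.0, 440.0, 523.25, 587.33, 659.25]  # Hz, low to high

def sonify(image, col_duration=0.1):
    """image: rows of 0/1 pixels, top row first.
    Returns (onset_time_s, frequency_hz) events, left to right."""
    n_rows = len(image)
    events = []
    for x in range(len(image[0])):       # columns unfold over time
        for y in range(n_rows):
            if image[y][x]:
                events.append((x * col_duration, PENTATONIC[n_rows - 1 - y]))
    return events

# A 5x3 letter "L": one tall stroke plus a base
letter_l = [[1, 0, 0],
            [1, 0, 0],
            [1, 0, 0],
            [1, 0, 0],
            [1, 1, 1]]
events = sonify(letter_l)
```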


Subjects
Brain, Learning, Adult, Humans, Touch, Brain Mapping, Sound, Magnetic Resonance Imaging, Blindness
5.
PLoS One; 18(6): e0287802, 2023.
Article in English | MEDLINE | ID: mdl-37352216

ABSTRACT

[This corrects the article DOI: 10.1371/journal.pone.0250281.].

6.
Curr Biol; 33(7): 1211-1219.e5, 2023 04 10.
Article in English | MEDLINE | ID: mdl-36863342

ABSTRACT

V6 is a retinotopic area located in the dorsal visual stream that integrates eye movements with retinal and visuo-motor signals. Despite the known role of V6 in visual motion, it is unknown whether it is involved in navigation and how sensory experiences shape its functional properties. We explored the involvement of V6 in egocentric navigation in sighted and in congenitally blind (CB) participants navigating via an in-house distance-to-sound sensory substitution device (SSD), the EyeCane. We performed two fMRI experiments on two independent datasets. In the first experiment, CB and sighted participants navigated the same mazes. The sighted performed the mazes via vision, while the CB performed them via audition. The CB performed the mazes before and after a training session, using the EyeCane SSD. In the second experiment, a group of sighted participants performed a motor topography task. Our results show that right V6 (rhV6) is selectively involved in egocentric navigation independently of the sensory modality used. Indeed, after training, rhV6 of CB is selectively recruited for auditory navigation, similarly to rhV6 in the sighted. Moreover, we found activation for body movement in area V6, which can putatively contribute to its involvement in egocentric navigation. Taken together, our findings suggest that area rhV6 is a unique hub that transforms spatially relevant sensory information into an egocentric representation for navigation. While vision is clearly the dominant modality, rhV6 is in fact a supramodal area that can develop its selectivity for navigation in the absence of visual experience.
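A distance-to-sound mapping of the kind the EyeCane is described as using can be sketched minimally as below. The device's actual ranges and encoding are not specified here; the cue-rate values and function name are illustrative assumptions.

```python
# Map obstacle distance to an auditory pulse rate: closer -> faster cues.
def distance_to_pulse_rate(distance_m, max_range_m=5.0,
                           min_hz=2.0, max_hz=40.0):
    """Return the pulse rate (Hz) cueing an obstacle at the given distance."""
    d = min(max(distance_m, 0.0), max_range_m)   # clamp to sensing range
    return max_hz - (d / max_range_m) * (max_hz - min_hz)
```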


Subjects
Auditory Perception, Movement, Humans, Movement/physiology, Motion (Physics), Hearing, Eye Movements
7.
Front Neurosci; 17: 973525, 2023.
Article in English | MEDLINE | ID: mdl-36968509

ABSTRACT

The Extrastriate Body Area (EBA) participates in the visual perception and motor actions of body parts. We recently showed that EBA's perceptual function develops independently of visual experience, responding to stimuli with body-part information in a supramodal fashion. However, it is still unclear if the EBA similarly maintains its action-related function. Here, we used fMRI to study motor-evoked responses and connectivity patterns in the congenitally blind brain. We found that, unlike the case of perception, EBA does not develop an action-related response without visual experience. In addition, we show that congenital blindness alters EBA's connectivity profile in a counterintuitive way: functional connectivity with sensorimotor cortices dramatically decreases, whereas connectivity with perception-related visual occipital cortices remains high. To the best of our knowledge, we show for the first time that action-related functions and connectivity in the visual cortex could be contingent on visuomotor experience. We further discuss the role of the EBA within the context of visuomotor control and predictive coding theory.

8.
Front Hum Neurosci; 17: 1058617, 2023.
Article in English | MEDLINE | ID: mdl-36936618

ABSTRACT

Current advancements in both technology and science allow us to manipulate our sensory modalities in new and unexpected ways. In the present study, we explore the potential of expanding what we perceive through our natural senses by utilizing a visual-to-auditory sensory substitution device (SSD), the EyeMusic, an algorithm that converts images to sound. The EyeMusic was initially developed to allow blind individuals to create a spatial representation of information arriving from a video feed at a slow sampling rate. In this study, we aimed to use the EyeMusic to cover the regions outside the visual field of sighted individuals. We use it in this initial proof-of-concept study to test the ability of sighted subjects to combine visual information with surrounding auditory sonification representing visual information. Participants in this study were tasked with recognizing and adequately placing the stimuli, using sound to represent the areas outside the standard human visual field. As such, the participants were asked to report shapes' identities as well as their spatial orientation (front/right/back/left), requiring combined visual (90° frontal) and auditory input (the remaining 270°) for successful performance of the task (content in both vision and audition was presented in a sweeping clockwise motion around the participant). We found that participants were successful at levels well above chance after a brief 1-h-long session of online training and one on-site training session averaging 20 min. In some cases they could even draw a 2D representation of the image. Participants could also generalize, recognizing new shapes they were not explicitly trained on. Our findings provide an initial proof of concept indicating that sensory augmentation devices and techniques can potentially be used in combination with natural sensory information in order to expand the natural fields of sensory perception.
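The 90°/270° partition of the surround described above can be sketched as a simple azimuth check. The angle convention (0° = straight ahead, frontal ±45° covered by vision) and the function name are hypothetical; the study's sweep mechanics are not specified here.

```python
# Decide which channel covers a given direction around the participant:
# the frontal 90-degree window is visual, the remaining 270 degrees are
# sonified.
def modality_for(azimuth_deg):
    """Return "vision" or "audio" for a direction given in degrees."""
    a = azimuth_deg % 360            # normalise to [0, 360)
    return "vision" if a <= 45 or a >= 315 else "audio"
```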

9.
Front Neurosci; 16: 970878, 2022.
Article in English | MEDLINE | ID: mdl-36440286

ABSTRACT

Greater cortical gyrification (GY) is linked with enhanced cognitive abilities and is also negatively related to cortical thickness (CT). Individuals who are congenitally blind (CB) exhibit remarkable functional brain plasticity, which enables them to perform certain non-visual and cognitive tasks with supranormal abilities. For instance, extensive training using touch and audition enables CB people to develop impressive skills, and there is evidence linking these skills to cross-modal activations of primary visual areas. Congenital blindness entails a cascade of anatomical, morphometric, and functional-connectivity changes in non-visual structures, volumetric reductions in several components of the visual system, and increased CT. No study to date has explored GY changes in this population, or how variations in CT relate to GY in CB. T1-weighted 3D structural magnetic resonance imaging scans were acquired to examine the effects of congenital visual deprivation on cortical structures in a healthy sample of 11 CB individuals (6 male) and 16 age-matched sighted controls (SC) (10 male). In this report, we show for the first time an increase in GY in several brain areas of CB individuals compared to SC, and a negative relationship between GY and CT in several different cortical areas of the CB brain. We discuss the implications of our findings and the contributions of developmental factors and synaptogenesis to the relationship between CT and GY in CB individuals compared to SC.

10.
Front Neurosci; 16: 921321, 2022.
Article in English | MEDLINE | ID: mdl-36263367

ABSTRACT

Previous evidence suggests that visual experience is crucial for the emergence and tuning of the typical neural system for face recognition. To challenge this conclusion, we trained congenitally blind adults to recognize faces via a visual-to-auditory sensory-substitution device (SSD). Our results showed a preference for trained faces over other SSD-conveyed visual categories in the fusiform gyrus and in other known face-responsive regions of the deprived ventral visual stream. We also observed a parametric modulation in the same cortical regions for face orientation (upright vs. inverted) and face novelty (trained vs. untrained). Our results strengthen the conclusion that there is a predisposition for sensory-independent and computation-specific processing in specific cortical regions that can be retained through life-long sensory deprivation, independently of previous perceptual experience. They also highlight that, if the right training is provided, such cortical preference maintains its tuning to what were considered visual-specific face features.

11.
Neuropsychologia; 173: 108305, 2022 08 13.
Article in English | MEDLINE | ID: mdl-35752268

ABSTRACT

The phenomenology of the blind has provided an age-old, unparalleled means of exploring the enigmatic link between the brain and mind. This paper delves into the unique phenomenological experience of a man who became blind in adulthood. He subsequently underwent both an Argus II retinal prosthesis implant and training, and extensive training on the EyeMusic visual-to-auditory sensory substitution device (SSD), thereby becoming the first reported case to date of dual proficiency with both devices. He offers a firsthand account of what he considers the great potential of combining sensory substitution devices with visual prostheses as part of a complete visual restoration protocol. While the Argus II retinal prosthesis alone provided him with immediate visual percepts by way of electrically stimulated phosphenes elicited by the device, the EyeMusic SSD requires extensive training from the outset. Yet following the extensive training program with the EyeMusic, our subject reports that the SSD allowed him a richer, more complex perceptual experience that felt more "second nature" to him, while the Argus II prosthesis (which also requires training) did not allow him to achieve the same levels of automaticity and transparency. Following long-term use of the EyeMusic SSD, our subject reported that visual percepts representing mainly, but not limited to, colors portrayed by the EyeMusic are elicited in association with auditory stimuli, indicating the acquisition of a high level of automaticity. Finally, the case study indicates an additive benefit of combining both devices on the user's subjective phenomenological visual experience.


Subjects
Visual Prostheses, Adult, Blindness/surgery, Humans, Male, Phosphenes, Vision Disorders
12.
Sci Rep; 12(1): 4330, 2022 03 14.
Article in English | MEDLINE | ID: mdl-35288597

ABSTRACT

Unlike sighted individuals, congenitally blind individuals have little to no experience with face shapes. Instead, they rely on non-shape cues, such as voices, to perform character identification. The extent to which face-shape perception can be learned in adulthood via a different sensory modality (i.e., not vision) remains poorly explored. We used a visual-to-auditory Sensory Substitution Device (SSD) that enables conversion of visual images to the auditory modality while preserving their visual characteristics. Expert SSD users were systematically taught to identify cartoon faces via audition. Following a tailored training program lasting ~ 12 h, congenitally blind participants successfully identified six trained faces with high accuracy. Furthermore, they effectively generalized their identification to the untrained, inverted orientation of the learned faces. Finally, after completing the extensive 12-h training program, participants learned six new faces within 2 additional hours of training, suggesting internalization of face-identification processes. Our results document for the first time that facial features can be processed through audition, even in the absence of visual experience across the lifespan. Overall, these findings have important implications for both non-visual object recognition and visual rehabilitation practices and prompt the study of the neural processes underlying auditory face perception in the absence of vision.


Subjects
Auditory Perception, Visual Perception, Adult, Blindness, Head, Humans, Learning
13.
Front Hum Neurosci; 16: 1058093, 2022.
Article in English | MEDLINE | ID: mdl-36776219

ABSTRACT

Humans, like most animals, integrate sensory input in the brain from different sensory modalities. Yet humans are distinct in their ability to grasp symbolic input, which is interpreted into a cognitive mental representation of the world. This representation merges with external sensory input, providing modality integration of a different sort. This study evaluates the Topo-Speech algorithm in the blind and visually impaired. The system provides spatial information about the external world by applying sensory substitution alongside symbolic representations, in a manner that corresponds with the unique way our brains acquire and process information. Spatial information customarily acquired through vision is conveyed through the auditory channel, combining sensory (auditory) features with symbolic, spoken-language features. The Topo-Speech sweeps the visual scene or image, representing each object's identity with a spoken name while simultaneously conveying its location: the x-axis of the scene is mapped to the time at which the name is announced, and the y-axis to the pitch of the voice. This proof-of-concept study primarily explores the practical applicability of this approach in 22 visually impaired and blind individuals. The findings showed that individuals from both populations could effectively interpret and use the algorithm after a single training session. The blind showed an accuracy of 74.45%, while the visually impaired had an average accuracy of 72.74%. These results are comparable to those of the sighted, as shown in previous research, with all participants above chance level. As such, we demonstrate practically how aspects of spatial information can be transmitted through non-visual channels. To complement the findings, we weigh in on debates concerning models of spatial knowledge (the persistent, cumulative, and convergent models) and the capacity for spatial representation in the blind. We suggest that the present study's findings support the convergence model and the view that the blind are capable of aspects of spatial representation, as conveyed by the algorithm, comparable to those of the sighted. Finally, we present possible future developments, implementations, and use cases for the system as an aid for the blind and visually impaired.
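The x-to-time and y-to-pitch mapping described above can be sketched as follows. The sweep duration, pitch range, and function names are hypothetical parameter choices; the published system's values may differ.

```python
# Each object's name is spoken; its x position sets when the name is
# announced during the sweep, and its y position sets the voice pitch.
def topo_speech_events(objects, scene_w=1.0, scene_h=1.0,
                       sweep_s=2.0, pitch_lo=100.0, pitch_hi=300.0):
    """objects: list of (name, x, y) in scene coordinates.
    Returns (onset_s, pitch_hz, name) tuples ordered by onset."""
    events = []
    for name, x, y in objects:
        onset = (x / scene_w) * sweep_s                           # left -> early
        pitch = pitch_lo + (y / scene_h) * (pitch_hi - pitch_lo)  # up -> high
        events.append((round(onset, 3), round(pitch, 1), name))
    return sorted(events)

events = topo_speech_events([("cup", 0.25, 1.0), ("key", 0.75, 0.0)])
```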

14.
Front Neurosci; 16: 962817, 2022.
Article in English | MEDLINE | ID: mdl-36711132

ABSTRACT

As neuroscience and rehabilitative techniques advance, age-old questions concerning the visual experience of those who gain sight after blindness, once thought to be philosophical alone, take center stage and become the target of scientific inquiry. In this study, we employ a battery of visual perception tasks to study the unique experience of a small group of children who have undergone vision-restoring cataract removal surgery as part of the Himalayan Cataract Project. We tested their abilities to perceive in three dimensions (3D) using a binocular rivalry task and the Brock string task, perceive visual illusions, use cross-modal mappings between touch and vision, and spatially group based on geometric cues. Some of the children in this study gained a sense of sight for the first time in their lives, having been born with bilateral congenital cataracts, while others suffered late-onset blindness in one eye alone. This study simultaneously supports yet raises further questions concerning Hubel and Wiesel's critical-period theory, and provides additional insight into Molyneux's problem: whether vision and touch can be correlated immediately upon sight restoration. We suggest that our findings present a relatively unexplored intermediate stage of 3D vision development. Importantly, we spotlight some essential geometrical perception abilities that strengthen the idea that spontaneous geometry intuitions arise independently of visual experience (and education), thus replicating and extending previous studies. We introduce a previously unexplored approach: testing children who have undergone congenital cataract removal surgery and who perform the tasks via vision, whereas previous work has explored these abilities in the congenitally blind via touch. Taken together, our findings provide insight into the development of what is commonly known as the visual system in the visually deprived, and highlight the need to further empirically explore an amodal, task-based interpretation of specializations in the development and structure of the brain. Moreover, we propose a novel objective method, based on a simple binocular rivalry task and the Brock string task, for determining congenital (early) vs. late blindness where medical history and records are partial or lacking (as is often the case with cataract removal patients).

15.
Front Hum Neurosci; 15: 713931, 2021.
Article in English | MEDLINE | ID: mdl-34803631

ABSTRACT

Manipulating sensory and motor cues can cause an illusory perception of ownership of a fake body part. Presumably, the illusion works as long as the false body part's position and appearance are anatomically plausible. Here, we introduce an illusion that challenges past assumptions on body ownership. We used virtual reality to switch and mirror participants' views of their hands: when a participant moves their physical hand, they see the incongruent virtual hand moving. The result is an anatomically implausible configuration of the fake hand. Despite the hand switch, participants reported significant body ownership sensations over the virtual hands. In the first, between-group experiment, we found that the strength of body ownership over the incongruent hands was similar to that over congruent hands, whereas in the second, within-group experiment, anatomical incongruency significantly decreased body ownership. Still, participants reported significant body ownership sensations over the switched hands. Curiously, we found that perceived levels of agency mediate the effect of anatomical congruency on body ownership. These findings offer a fresh perspective on the relationship between anatomical plausibility and assumed body ownership. We propose that goal-directed and purposeful actions can override anatomical plausibility constraints, and discuss this in the context of the immersive properties of virtual reality.

16.
Sci Rep; 11(1): 11944, 2021 06 07.
Article in English | MEDLINE | ID: mdl-34099756

ABSTRACT

Can humans extend and augment their natural perception during adulthood? Here, we address this fascinating question by investigating the extent to which it is possible to successfully augment visual spatial perception to include the backward spatial field (a region where humans are naturally blind) via other sensory modalities (i.e., audition). We thus developed a sensory-substitution algorithm, the "Topo-Speech," which conveys the identity of objects through language and their exact locations via vocal-sound manipulations, namely two key features of visual spatial perception. Using two different groups of blindfolded sighted participants, we tested the efficacy of this algorithm to successfully convey the location of objects in the forward or backward spatial fields following ~10 min of training. Results showed that blindfolded sighted adults successfully used the Topo-Speech to locate objects on a 3 × 3 grid either positioned in front of them (forward condition) or behind their back (backward condition). Crucially, performance in the two conditions was entirely comparable. This suggests that novel spatial sensory information conveyed via our existing sensory systems can be successfully encoded to extend/augment human perception. The implications of these results are discussed in relation to spatial perception, sensory augmentation and sensory rehabilitation.


Subjects
Algorithms, Auditory Perception/physiology, Space Perception/physiology, Touch Perception/physiology, Visual Perception/physiology, Adaptation, Physiological/physiology, Adult, Blindness/physiopathology, Female, Humans, Male, Psychomotor Performance/physiology, Visual Cortex/physiology, Young Adult
18.
Sci Rep; 11(1): 10636, 2021 05 20.
Article in English | MEDLINE | ID: mdl-34017027

ABSTRACT

Perceiving the spatial location and physical dimensions of touched objects is crucial for goal-directed actions. To achieve this, our brain transforms skin-based coordinates into an external reference frame by integrating visual and postural information. In the current study, we examine the role of posture in mapping tactile sensations to a visual image. We developed a new visual-to-touch sensory substitution device that transforms images into a sequence of vibrations on the arm. Fifty-two blindfolded participants performed spatial recognition tasks in three different arm postures and had to switch postures between trial blocks. As participants were not told which side of the device was up and which was down, they could choose how to map its vertical axis in their responses. Contrary to previous findings, we show that new proprioceptive inputs can be overridden in mapping tactile sensations. We discuss the results within the context of the spatial task and the various sensory contributions to the process.
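The image-to-vibration transformation described above can be sketched as follows. The motor layout, timing, and orientation convention are illustrative assumptions, not the device's actual specification.

```python
# A line of vibration motors along the arm encodes the image's vertical
# axis; image columns are delivered one per time step.
def image_to_vibrations(image):
    """image: rows of 0/1 pixels, top row first.
    Returns, per time step, sorted indices of active motors (0 = lowest)."""
    n_rows = len(image)
    frames = []
    for x in range(len(image[0])):                     # one column per step
        active = [n_rows - 1 - y for y in range(n_rows) if image[y][x]]
        frames.append(sorted(active))
    return frames

# A diagonal line sweeps from the top motor down to the bottom one.
diagonal = [[1, 0, 0],
            [0, 1, 0],
            [0, 0, 1]]
frames = image_to_vibrations(diagonal)
```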


Subjects
Touch Perception/physiology, Touch/physiology, Visual Perception/physiology, Adult, Female, Humans, Male, Posture/physiology
19.
Cognition; 212: 104716, 2021 07.
Article in English | MEDLINE | ID: mdl-33895652

ABSTRACT

Geometrical intuitions spontaneously drive visuo-spatial reasoning in human adults, children and animals. Is their emergence intrinsically linked to visual experience, or does it reflect a core property of cognition shared across sensory modalities? To address this question, we tested the sensitivity of blind-from-birth adults to geometrical invariants using a haptic deviant-figure detection task. Blind participants spontaneously used many geometric concepts such as parallelism, right angles and geometrical shapes to detect intruders in haptic displays, but experienced difficulties with symmetry and complex spatial transformations. Across items, their performance was highly correlated with that of sighted adults performing the same task in touch (blindfolded) and in vision, as well as with the performance of uneducated preschoolers and Amazonian adults. Our results support the existence of an amodal core system of geometry that arises independently of visual experience. However, performance at selecting geometric intruders was generally higher in the visual than in the haptic modality, suggesting that sensory-specific spatial experience may play a role in refining the properties of this core system of geometry.


Subjects
Touch Perception, Adult, Blindness, Child, Humans, Knowledge, Mathematics, Touch, Vision, Ocular
20.
PLoS One; 16(4): e0250281, 2021.
Article in English | MEDLINE | ID: mdl-33905446

ABSTRACT

Sensory Substitution Devices (SSDs) convey visual information through audition or touch, targeting blind and visually impaired individuals. One bottleneck to the adoption of SSDs in everyday life by blind users is the constant dependency on sighted instructors throughout the learning process. Here, we present a proof of concept for the efficacy of an online self-training program developed for learning the basics of the EyeMusic visual-to-auditory SSD, tested on sighted blindfolded participants. Additionally, aiming to identify the best training strategy to be later re-adapted for the blind, we compared multisensory vs. unisensory as well as perceptual vs. descriptive feedback approaches. To these aims, sighted participants performed identical SSD-stimuli identification tests before and after ~75 minutes of self-training on the EyeMusic algorithm. Participants were divided into five groups, differing by the feedback delivered during training: auditory-descriptive, audio-visual textual description, audio-visual perceptual (simultaneous and interleaved), and a control group which had no training. At baseline, before any EyeMusic training, participants' identification of SSD objects was significantly above chance, highlighting the algorithm's intuitiveness. Furthermore, self-training led to a significant improvement in accuracy between pre- and post-training tests in each of the four feedback groups versus control, though no significant difference emerged among those groups. Nonetheless, significant correlations between individual post-training success rates and various learning measures acquired during training suggest a trend for an advantage of multisensory over unisensory feedback strategies, while no trend emerged for perceptual vs. descriptive strategies. The success at baseline strengthens the conclusion that cross-modal correspondences facilitate learning, given that SSD algorithms are based on such correspondences. Additionally, and crucially, the results highlight the feasibility of self-training for the first stages of SSD learning, and suggest that for these initial stages unisensory training, easily implemented also for blind and visually impaired individuals, may suffice. Together, these findings will potentially boost the use of SSDs for rehabilitation.


Subjects
Algorithms, Learning/physiology, Sensory Aids, Visually Impaired Persons/rehabilitation, Wearable Electronic Devices, Acoustic Stimulation/instrumentation, Acoustic Stimulation/methods, Adult, Auditory Perception/physiology, Biofeedback, Psychology, Female, Healthy Volunteers, Humans, Male, Touch Perception/physiology